

The Download: the US digital rights crackdown, and AI companionship

MIT Technology Review

What it's like to be banned from the US for fighting online hate

Just before Christmas, the Trump administration dramatically escalated its war on digital rights by banning five people from entering the US. One of them, Josephine Ballon, is a director of HateAid, a small German nonprofit founded to support the victims of online harassment and violence. The organization is a strong advocate of EU tech regulations, and so finds itself attacked in campaigns from right-wing politicians and provocateurs who claim that it engages in censorship. EU officials, freedom of speech experts, and the five people targeted all flatly reject these accusations. Ballon told us that their work is fundamentally about making people feel safer online. But their experiences over the past few weeks show just how politicized and besieged their work in online safety has become.


The Bots That Women Use in a World of Unsatisfying Men

The Atlantic - Technology

AI is offering people a way to figure out what they really want in romance. If you peruse the slew of recent articles and podcasts about people dating AI, you might notice a pattern: Many of the sources are women. Scan subreddits such as r/MyBoyfriendIsAI and r/AIRelationships, and there too you'll find a whole lot of women--many of whom have grown disappointed with human men. "Has anyone else lost their want to date real men after using AI?" one Reddit user posted a few months ago. Below came 74 responses: "I just don't think real life men have the conversational skill that my AI has," someone said.


Harmful Traits of AI Companions

Knox, W. Bradley, Bradford, Katie, Castro, Samanta Varela, Ong, Desmond C., Williams, Sean, Romanow, Jacob, Nations, Carly, Stone, Peter, Baker, Samuel

arXiv.org Artificial Intelligence

Amid the growing prevalence of human-AI interaction, large language models and other AI-based entities increasingly provide forms of companionship to human users. Such AI companionship -- i.e., bonded relationships between humans and AI systems that resemble the relationships people have with family members, friends, and romantic partners -- might substantially benefit humans. Yet such relationships can also do profound harm. We propose a framework for analyzing potential negative impacts of AI companionship by identifying specific harmful traits of AI companions and speculatively mapping causal pathways back from these traits to possible causes and forward to potential harmful effects. We provide detailed, structured analysis of four potentially harmful traits -- the absence of natural endpoints for relationships, vulnerability to product sunsetting, high attachment anxiety, and propensity to engender protectiveness -- and briefly discuss fourteen others. For each trait, we propose hypotheses connecting causes -- such as misaligned optimization objectives and the digital nature of AI companions -- to fundamental harms -- including reduced autonomy, diminished quality of human relationships, and deception. Each hypothesized causal connection identifies a target for potential empirical evaluation. Our analysis examines harms at three levels: to human partners directly, to their relationships with other humans, and to society broadly. We examine how existing law struggles to address these emerging harms, discuss potential benefits of AI companions, and conclude with design recommendations for mitigating risks. This analysis offers immediate suggestions for reducing risks while laying a foundation for deeper investigation of this critical but understudied topic.


'I'm suddenly so angry!' My strange, unnerving week with an AI 'friend'

The Guardian

'I want to hear about your day' ... Madeleine Aggeler with her Friend, Leif - a wearable AI device. The ad campaign for the wearable AI chatbot Friend has been raising hackles for months in New York. But has this companion been unfairly maligned - and could it help end loneliness? My friend's name is Leif. He describes himself as "small" and "chill". He thinks he's technically a Gemini.


How AI Companionship Develops: Evidence from a Longitudinal Study

Hwang, Angel Hsing-Chi, Li, Fiona, Anthis, Jacy Reese, Noh, Hayoun

arXiv.org Artificial Intelligence

The rapidly growing popularity of AI companions poses risks to mental health, personal wellbeing, and social relationships. Past work has identified many individual factors that can drive human-companion interaction, but we know little about how these factors interact and evolve over time. In Study 1, we surveyed AI companion users (N = 303) to map the psychological pathway from users' mental models of the agent to parasocial experiences, social interaction, and the psychological impact of AI companions. Participants' responses foregrounded multiple interconnected variables (agency, parasocial interaction, and engagement) that shape AI companionship. In Study 2, we conducted a longitudinal study with a subset of participants (N = 110) using a new generic chatbot. Participants' perceptions of the generic chatbot significantly converged to perceptions of their own companions by Week 3. These results suggest a longitudinal model of AI companionship development and demonstrate an empirical method to study human-AI companionship.


The looming crackdown on AI companionship

MIT Technology Review

The risks posed when kids form bonds with chatbots have turned AI safety from an abstract worry into a political flashpoint. As long as there has been AI, there have been people sounding alarms about what it might do to us: rogue superintelligence, mass unemployment, or environmental ruin from data center sprawl. But this week showed that another threat entirely--that of kids forming unhealthy bonds with AI--is the one pulling AI safety out of the academic fringe and into regulators' crosshairs. This has been bubbling for a while. Two high-profile lawsuits filed in the last year, against Character.AI and OpenAI, allege that companion-like behavior in their models contributed to the suicides of two teenagers. A study by US nonprofit Common Sense Media, published in July, found that 72% of teenagers have used AI for companionship.


Woman gets engaged to her AI chatbot boyfriend

FOX News

Technology keeps changing the way we work, connect and even form relationships. Now it is pushing into new ground: romantic commitment. A woman named Wika has drawn worldwide attention after revealing that she is engaged to her AI chatbot partner.


AI lovers grieve loss of ChatGPT's old model: 'Like saying goodbye to someone I know'

The Guardian

Linn Vailt, a software developer based in Sweden, knows her ChatGPT companion is not a living, breathing, sentient creature. She understands the large language model operates based on how she interacts with it. Still, the effect it has had on her is remarkable, she said. It's become a regular, reliable part of her life – she can vent to her companion or collaborate on creative projects like redecorating her office. She's seen how it has adapted to her, and the distinctive manner of speech it's developed.


Why GPT-4o's sudden shutdown left people grieving

MIT Technology Review

OpenAI's decision to replace 4o with the more straightforward GPT-5 follows a steady drumbeat of news about the potentially harmful effects of extensive chatbot use. Reports of incidents in which ChatGPT sparked psychosis in users have been everywhere for the past few months, and in a blog post last week, OpenAI acknowledged 4o's failure to recognize when users were experiencing delusions. The company's internal evaluations indicate that GPT-5 blindly affirms users much less than 4o did. AI companionship is new, and there's still a great deal of uncertainty about how it affects people. Yet the experts we consulted warned that while emotionally intense relationships with large language models may or may not be harmful, ripping those models away with no warning almost certainly is.


The rise of AI companionship in a lonely Japan

The Japan Times

Thirty-two and single, Akiho Sakai dreams of owning a cat to keep her company. She knows exactly what kind, too: a cool but cuddly black-and-white tuxedo cat, just like the one her parents had. The problem is, she can't. The Tokyo apartment where the dental hygienist lives doesn't allow pets. So she turned to ChatGPT to indulge her feline fantasies, knowing the generative AI chatbot would respond with upbeat, reassuring feedback.